Neural Models of Bayesian Belief Propagation

Author

  • Rajesh P. N. Rao
Abstract

Animals are constantly faced with the challenge of interpreting signals from noisy sensors and acting in the face of incomplete knowledge about the environment. A rigorous approach to handling uncertainty is to characterize and process information using probabilities. Having estimates of the probabilities of objects and events allows one to make intelligent decisions in the presence of uncertainty. A prey animal could decide whether to keep foraging or to flee based on the probability that an observed movement or sound was caused by a predator. Probabilistic estimates are also essential ingredients of more sophisticated decision-making routines such as those based on expected reward or utility. An important component of a probabilistic system is a method for reasoning based on combining prior knowledge about the world with current input data. Such methods are typically based on some form of Bayesian inference, involving the computation of the posterior probability distribution of one or more random variables of interest given input data. In this chapter, we describe how neural circuits could implement a general algorithm for Bayesian inference known as belief propagation. The belief propagation algorithm involves passing "messages" (probabilities) between the nodes of a graphical model that captures the causal structure of the environment. We review the basic notion of graphical models and illustrate the belief propagation algorithm with an example. We investigate potential neural implementations of the algorithm based on networks of leaky integrator neurons and describe how such networks can perform sequential and hierarchical Bayesian inference. Simulation results are presented for comparison with neurobiological data. We conclude the chapter by discussing other recent models of inference in neural circuits and suggest directions for future research.
Some of the ideas reviewed in this chapter have appeared in prior publications [30, 31, 32, 42]; these may be consulted for additional details and results not included in this chapter.
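As a concrete illustration of the message passing described in the abstract, the following is a minimal sum-product sketch on a toy two-node model (a hidden cause X and an observation Y). The variable names and all probability values are hypothetical examples, not taken from the chapter:

```python
# Minimal belief propagation (sum-product) sketch on a two-node chain X -> Y.
# Observing Y sends a "message" (a likelihood vector) back to X, which is
# combined with the prior over X to yield the posterior belief.

def normalize(p):
    """Rescale an unnormalized probability vector to sum to 1."""
    s = sum(p)
    return [v / s for v in p]

# Hypothetical prior over the hidden cause X (e.g., predator absent / present).
prior_x = [0.9, 0.1]

# Hypothetical likelihood p(Y | X): rows indexed by x, columns by y.
likelihood = [[0.8, 0.2],   # p(y | x = 0)
              [0.3, 0.7]]   # p(y | x = 1)

def posterior_x(observed_y):
    """Posterior belief over X after Y sends its message."""
    # Message from Y to X: likelihood of the observed value under each x.
    message_y_to_x = [likelihood[x][observed_y] for x in range(2)]
    # Belief at X: prior times incoming message, then normalize.
    return normalize([prior_x[x] * message_y_to_x[x] for x in range(2)])

belief = posterior_x(observed_y=1)
print(belief)  # approximately [0.72, 0.28]
```

On larger singly connected graphs the same local rule applies at every node, with each node multiplying together the messages arriving from its neighbors before normalizing.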

Similar Resources

The Computational Power of Dynamic Bayesian Networks

This paper considers the computational power of constant size, dynamic Bayesian networks. Although discrete dynamic Bayesian networks are no more powerful than hidden Markov models, dynamic Bayesian networks with continuous random variables and discrete children of continuous parents are capable of performing Turing-complete computation. With modified versions of existing algorithms for belief ...

On the relationship between deterministic and probabilistic directed Graphical models: From Bayesian networks to recursive neural networks

Machine learning methods that can handle variable-size structured data such as sequences and graphs include Bayesian networks (BNs) and Recursive Neural Networks (RNNs). In both classes of models, the data is modeled using a set of observed and hidden variables associated with the nodes of a directed acyclic graph. In BNs, the conditional relationships between parent and child variables are pro...

Correctness of Belief Propagation in Gaussian Graphical Models of Arbitrary Topology

Graphical models, such as Bayesian networks and Markov random fields, represent statistical dependencies of variables by a graph. Local "belief propagation" rules of the sort proposed by Pearl (1988) are guaranteed to converge to the correct posterior probabilities in singly connected graphs. Recently good performance has been obtained by using these same rules on graphs with loops, a method known a...

Importance sampling algorithms for Bayesian networks: Principles and performance

Precision achieved by stochastic sampling algorithms for Bayesian networks typically deteriorates in the face of extremely unlikely evidence. In addressing this problem, importance sampling algorithms seem to be most successful. We discuss the principles underlying the importance sampling algorithms in Bayesian networks. After that, we describe Evidence Pre-propagation Importance Sampling (EPIS...
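The abstract above motivates importance sampling for unlikely evidence. As a point of reference, here is a generic likelihood-weighting sketch (the simplest importance sampling scheme for Bayesian networks, not the EPIS algorithm described in the paper) on a hypothetical two-variable network X -> Y; all probability values are invented for illustration:

```python
import random

# Likelihood weighting on a toy network X -> Y with evidence Y = 1.
# Evidence is never rejected; each sample is weighted by the likelihood
# of the fixed evidence given the sampled value of X.

random.seed(0)

p_x1 = 0.1                       # hypothetical prior p(X = 1)
p_y1_given_x = {0: 0.2, 1: 0.7}  # hypothetical likelihood p(Y = 1 | X)

def likelihood_weighting(n_samples, evidence_y=1):
    """Estimate p(X = 1 | Y = evidence_y) by weighted sampling."""
    weighted_x1 = 0.0
    total_weight = 0.0
    for _ in range(n_samples):
        x = 1 if random.random() < p_x1 else 0  # sample X from its prior
        # Weight by the likelihood of the evidence under the sampled x.
        w = p_y1_given_x[x] if evidence_y == 1 else 1.0 - p_y1_given_x[x]
        weighted_x1 += w * x
        total_weight += w
    return weighted_x1 / total_weight

estimate = likelihood_weighting(100_000)
print(round(estimate, 3))  # close to the exact posterior p(X=1 | Y=1) = 0.28
```

The deterioration the abstract mentions shows up when the evidence likelihood is tiny under the prior: most weights are then near zero and the estimate's variance grows, which is what more sophisticated importance distributions aim to fix.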


Publication date: 2006